
Static vs. Dynamic Modeling of Human Nonverbal Behavior from Multiple Cues and Modalities


Abstract

Human nonverbal behavior recognition from multiple cues and modalities has attracted considerable interest in recent years. Despite this interest, many research questions remain open, including the type of feature representation, the choice between static and dynamic classification schemes, the number and type of cues or modalities to use, and the optimal way of fusing them. This paper compares frame-based vs. window-based feature representations and employs static vs. dynamic classification schemes for two distinct problems in automatic human nonverbal behavior analysis: multicue discrimination between posed and spontaneous smiles from facial expressions, head and shoulder movements, and audio-visual discrimination between laughter and speech. Single-cue and single-modality results are compared to multicue and multimodal results by employing Neural Networks, Hidden Markov Models (HMMs), and 2- and 3-chain coupled HMMs. Subject-independent experimental evaluation shows that: 1) for both static and dynamic classification, fusing data coming from multiple cues and modalities proves useful for the overall recognition task, 2) the type of feature representation appears to have a direct impact on classification performance, and 3) static classification is comparable to dynamic classification, both for multicue discrimination between posed and spontaneous smiles and for audio-visual discrimination between laughter and speech.
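As a concrete illustration of the frame-based vs. window-based distinction mentioned in the abstract, the Python sketch below builds both representations from a sequence of per-frame cue measurements. It is a minimal example under assumed settings: the window length, step size, and summary statistics (mean and standard deviation) are illustrative choices, not the features or parameters used in the paper. Frame-based features keep one vector per frame (suitable as observations for a dynamic, HMM-style classifier), while window-based features summarise each sliding window into a single vector (suitable for a static classifier such as a neural network).

```python
import numpy as np

def frame_features(sequence):
    """Frame-based representation: one feature vector per frame.
    `sequence` is a (T, D) array of per-frame cue measurements
    (e.g. facial, head/shoulder, or audio features)."""
    return sequence  # each row is used as one observation

def window_features(sequence, win=20, step=10):
    """Window-based representation: summarise each sliding window
    with per-dimension mean and standard deviation, yielding one
    feature vector per window instead of per frame."""
    T, _ = sequence.shape
    feats = []
    for start in range(0, T - win + 1, step):
        w = sequence[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Hypothetical 100-frame clip with 12 per-frame cue measurements.
clip = np.random.randn(100, 12)
print(frame_features(clip).shape)   # (100, 12) -> per-frame observations
print(window_features(clip).shape)  # (9, 24)   -> one vector per window
```

In this sketch, a dynamic scheme would model the frame-level sequence directly (e.g. with an HMM), whereas a static scheme would classify each window-level vector independently.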
